Water Consumption


Agentic AI Sustainability Assessment for Supply Chain Document Insights

Gosmar, Diego, Pallotta, Anna Chiara, Zenezini, Giovanni

arXiv.org Artificial Intelligence

This paper presents a comprehensive sustainability assessment framework for document intelligence within supply chain operations, centered on agentic artificial intelligence (AI). We address the dual objective of improving automation efficiency while providing measurable environmental performance in document-intensive workflows. The research compares three scenarios: fully manual (human-only), AI-assisted (human-in-the-loop, HITL), and an advanced multi-agent agentic AI workflow leveraging parsers and verifiers. Empirical results show that AI-assisted HITL and agentic AI scenarios achieve reductions of up to 70-90% in energy consumption, 90-97% in carbon dioxide emissions, and 89-98% in water usage compared to manual processes. Notably, full agentic configurations, combining advanced reasoning (thinking mode) and multi-agent validation, achieve substantial sustainability gains over human-only approaches, even when resource usage increases slightly versus simpler AI-assisted solutions. The framework integrates performance, energy, and emission indicators into a unified ESG-oriented methodology for assessing and governing AI-enabled supply chain solutions. The paper includes a complete, replicable use case demonstrating the methodology's application to real-world document extraction tasks.
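The scenario comparison above reduces to percent-reduction arithmetic against the manual baseline. A minimal sketch, using hypothetical per-document figures (illustrative only, not the paper's measurements):

```python
# Hypothetical per-document resource figures for the three scenarios
# (illustrative values, not taken from the paper).
scenarios = {
    "manual":  {"energy_wh": 120.0, "co2_g": 60.0, "water_ml": 450.0},
    "hitl":    {"energy_wh": 18.0,  "co2_g": 3.5,  "water_ml": 30.0},
    "agentic": {"energy_wh": 24.0,  "co2_g": 4.2,  "water_ml": 38.0},
}

def reduction_vs_manual(scenario: str, metric: str) -> float:
    """Percent reduction of a metric relative to the fully manual baseline."""
    base = scenarios["manual"][metric]
    return 100.0 * (base - scenarios[scenario][metric]) / base

for name in ("hitl", "agentic"):
    for metric in ("energy_wh", "co2_g", "water_ml"):
        print(f"{name} {metric}: {reduction_vs_manual(name, metric):.1f}% reduction")
```

With these illustrative numbers, the agentic scenario uses slightly more energy than HITL (24 vs. 18 Wh) yet both remain far below the manual baseline, mirroring the paper's qualitative finding.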


Predicting Household Water Consumption Using Satellite and Street View Images in Two Indian Cities

Wang, Qiao, George, Joseph

arXiv.org Artificial Intelligence

Monitoring household water use in rapidly urbanizing regions is hampered by costly, time-intensive enumeration methods and surveys. We investigate whether publicly available imagery (satellite tiles and Google Street View (GSV) segmentation) and simple geospatial covariates (nightlight intensity, population density) can be used to predict household water consumption in Hubballi-Dharwad, India. We compare four approaches: survey features (benchmark), CNN embeddings (satellite, GSV, combined), and GSV semantic maps with auxiliary data. Under an ordinal classification framework, GSV segmentation plus remote-sensing covariates achieves 0.55 accuracy for water use, approaching survey-based models (0.59 accuracy). Error analysis shows high precision at the extremes of the household water consumption distribution, but confusion among middle classes due to overlapping visual proxies. We also compare and contrast our estimates of household water consumption with those of household subjective income. Our findings demonstrate that open-access imagery, coupled with minimal geospatial data, offers a promising alternative to surveys for obtaining reliable household water consumption estimates in urban analytics.
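The ordinal framing treats water-use classes as ordered (low < medium < high) rather than independent labels. A common reduction, sketched here in miniature with hypothetical thresholds (the paper's actual models operate on CNN embeddings and GSV segmentation features):

```python
# Minimal sketch of ordinal prediction via cumulative thresholds:
# a continuous score is mapped to the number of class boundaries it exceeds.

def ordinal_predict(score: float, thresholds: list) -> int:
    """Return the ordinal class index: the count of thresholds the score exceeds."""
    return sum(score > t for t in thresholds)

# Hypothetical boundaries separating low / medium / high water-use classes.
thresholds = [0.3, 0.7]
print(ordinal_predict(0.1, thresholds))  # low use  -> class 0
print(ordinal_predict(0.5, thresholds))  # medium   -> class 1
print(ordinal_predict(0.9, thresholds))  # high use -> class 2
```

This ordering-aware setup is one reason errors concentrate in the middle classes: scores near a boundary flip between adjacent classes, while extreme scores are classified reliably.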


Data centers consume massive amounts of water – companies rarely tell the public exactly how much

AIHub

As demand for artificial intelligence technology boosts construction and proposed construction of data centers around the world, those computers require not just electricity and land, but also a significant amount of water. Data centers use water directly, with cooling water pumped through pipes in and around the computer equipment. They also use water indirectly, through the water required to produce the electricity to power the facility. The amount of water used to produce electricity increases dramatically when the source is fossil fuels compared with solar or wind. A 2024 report from the Lawrence Berkeley National Laboratory estimated that in 2023, U.S. data centers consumed 17 billion gallons (64 billion liters) of water directly through cooling, and projected that by 2028 those figures could double, or even quadruple.


Measuring the environmental impact of delivering AI at Google Scale

Elsworth, Cooper, Huang, Keguo, Patterson, David, Schneider, Ian, Sedivy, Robert, Goodman, Savannah, Townsend, Ben, Ranganathan, Parthasarathy, Dean, Jeff, Vahdat, Amin, Gomes, Ben, Manyika, James

arXiv.org Artificial Intelligence

The transformative power of AI is undeniable - but as user adoption accelerates, so does the need to understand and mitigate the environmental impact of AI serving. However, no studies have measured AI serving environmental metrics in a production environment. This paper addresses this gap by proposing and executing a comprehensive methodology for measuring the energy usage, carbon emissions, and water consumption of AI inference workloads in a large-scale, AI production environment. Our approach accounts for the full stack of AI serving infrastructure - including active AI accelerator power, host system energy, idle machine capacity, and data center energy overhead. Through detailed instrumentation of Google's AI infrastructure for serving the Gemini AI assistant, we find the median Gemini Apps text prompt consumes 0.24 Wh of energy - a figure substantially lower than many public estimates. We also show that Google's software efficiency efforts and clean energy procurement have driven a 33x reduction in energy consumption and a 44x reduction in carbon footprint for the median Gemini Apps text prompt over one year. We identify that the median Gemini Apps text prompt uses less energy than watching nine seconds of television (0.24 Wh) and consumes the equivalent of five drops of water (0.26 mL). While these impacts are low compared to other daily activities, reducing the environmental impact of AI serving continues to warrant attention. Towards this objective, we propose that comprehensive measurement of AI serving environmental metrics is critical for accurately comparing models and for properly incentivizing efficiency gains across the full AI serving stack.
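The "full stack" accounting the abstract describes can be sketched as a per-prompt sum of component energies scaled by datacenter overhead, with water derived from energy via a water-use-efficiency factor. All figures below are hypothetical placeholders, chosen only so the total lands near the reported 0.24 Wh median; they are not the paper's component breakdown:

```python
# Hedged sketch of full-stack per-prompt accounting. Component values
# and the WUE factor are assumptions, not measurements from the paper.

def energy_per_prompt_wh(accel_wh, host_wh, idle_wh, pue):
    """Active accelerator + host system + amortized idle capacity,
    scaled by datacenter overhead (PUE)."""
    return (accel_wh + host_wh + idle_wh) * pue

def water_per_prompt_ml(energy_wh, wue_ml_per_wh):
    """Convert per-prompt energy to cooling water via a WUE factor."""
    return energy_wh * wue_ml_per_wh

e = energy_per_prompt_wh(accel_wh=0.14, host_wh=0.05, idle_wh=0.02, pue=1.09)
print(round(e, 3))                                   # ~0.229 Wh
print(round(water_per_prompt_ml(e, wue_ml_per_wh=1.1), 3))
```

The key point the methodology makes is that omitting any one term (idle capacity, host energy, or PUE) understates the true per-prompt footprint.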


Towards Sustainability Model Cards

Jouneaux, Gwendal, Cabot, Jordi

arXiv.org Artificial Intelligence

The growth of machine learning (ML) models and associated datasets triggers a consequent dramatic increase in energy costs for the use and training of these models. In the current context of environmental awareness and global sustainability concerns involving ICT, Green AI is becoming an important research topic. Initiatives like the AI Energy Score Ratings are a good example. Nevertheless, these benchmarking attempts are still to be integrated with existing work on Quality Models and Service-Level Agreements common in other, more mature, ICT subfields. This limits the (automatic) analysis of these model energy descriptions and their use in (semi)automatic model comparison, selection, and certification processes. We aim to leverage the concept of quality models and merge it with existing ML model reporting initiatives and Green/Frugal AI proposals to formalize a Sustainable Quality Model for AI/ML models. As a first step, we propose a new Domain-Specific Language to precisely define the sustainability aspects of an ML model (including the energy costs for its different tasks). This information can then be exported as an extended version of the well-known Model Cards initiative while, at the same time, being formal enough to serve as input to any other automated model-description process.
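The core idea is that sustainability data should be machine-readable and validatable, not free text. A minimal sketch of what such a structured card and an automated check could look like; the field names here are illustrative assumptions, not the paper's actual DSL:

```python
# Sketch of a machine-readable sustainability extension to a model card.
# All field names are hypothetical, invented for illustration.
sustainability_card = {
    "model": "example-classifier",
    "tasks": {
        "training":  {"energy_kwh": 1200.0, "co2e_kg": 510.0, "water_l": 9000.0},
        "inference": {"energy_kwh_per_1k_requests": 0.4},
    },
    "hardware": {"accelerator": "GPU", "count": 8},
}

# Mandatory fields per task, enabling automated comparison and certification.
REQUIRED_TASK_FIELDS = {"training": {"energy_kwh", "co2e_kg", "water_l"}}

def validate(card):
    """Return True only if every mandatory sustainability field is present."""
    for task, fields in REQUIRED_TASK_FIELDS.items():
        if not fields <= set(card.get("tasks", {}).get(task, {})):
            return False
    return True

print(validate(sustainability_card))  # True
```

A formal schema like this is what makes the "(semi)automatic model comparison, selection, and certification" the abstract mentions possible: two cards passing the same validator can be compared field by field.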


Not All Water Consumption Is Equal: A Water Stress Weighted Metric for Sustainable Computing

Wu, Yanran, Hua, Inez, Ding, Yi

arXiv.org Artificial Intelligence

Water consumption is an increasingly critical dimension of computing sustainability, especially as AI workloads rapidly scale. However, current water impact assessment often overlooks where and when water stress is more severe. To fill this gap, we present SCARF, the first general framework that evaluates the water impact of computing by factoring in both spatial and temporal variations in water stress. SCARF calculates an Adjusted Water Impact (AWI) metric that considers both consumption volume and local water stress over time. Through three case studies on LLM serving, datacenters, and semiconductor fabrication plants, we show the hidden opportunities for reducing water impact by optimizing location and time choices, paving the way for water-sustainable computing. The code is available at https://github.com/jojacola/SCARF.
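The intuition behind a stress-weighted metric is that a liter consumed in a water-stressed region and season should count for more than a liter consumed where water is abundant. A minimal sketch in that spirit, with hypothetical stress weights (see the SCARF repository for the actual AWI definition):

```python
# Sketch of a stress-weighted water metric: weight each interval's
# consumption by the local water-stress factor at that time.
# Volumes and weights below are hypothetical.

def adjusted_water_impact(consumption_l, stress_weights):
    """Sum of per-interval consumption (liters) scaled by stress weights."""
    assert len(consumption_l) == len(stress_weights)
    return sum(c * w for c, w in zip(consumption_l, stress_weights))

# Same total volume (200 L), different timing relative to water stress:
dry_season = adjusted_water_impact([100, 100], [2.0, 1.5])  # high-stress months
wet_season = adjusted_water_impact([100, 100], [0.5, 0.5])  # low-stress months
print(dry_season, wet_season)  # 350.0 100.0
```

Identical raw consumption yields a 3.5x difference in weighted impact, which is exactly the kind of hidden location-and-timing opportunity the case studies exploit.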


Making AI Less 'Thirsty'

Communications of the ACM

Artificial intelligence (AI) has enabled remarkable breakthroughs in numerous areas of critical importance, including tackling global challenges such as climate change. On the other hand, many AI models, especially large generative ones like GPT-4, are trained and deployed on energy-hungry servers in warehouse-scale datacenters, accelerating datacenter energy consumption at an unprecedented rate [25]. As a result, AI's carbon footprint has been undergoing scrutiny, driving the recent progress in AI carbon efficiency [24, 31]. However, AI's water footprint--many millions of liters of freshwater consumed for cooling the servers and for electricity generation--has largely remained under the radar and keeps escalating. If not properly addressed, AI's water footprint can potentially become a major roadblock to sustainability and create social conflicts, as freshwater resources suitable for human use are extremely limited and unevenly distributed.


The Dark Side of Digital Twins: Adversarial Attacks on AI-Driven Water Forecasting

Homaei, Mohammadhossein, Morales, Victor Gonzalez, Mogollon-Gutierrez, Oscar, Caro, Andres

arXiv.org Artificial Intelligence

Digital twins (DTs) are improving water distribution systems by using real-time data, analytics, and prediction models to optimize operations. This paper presents a DT platform designed for a Spanish water supply network, utilizing Long Short-Term Memory (LSTM) networks to predict water consumption. However, machine learning models are vulnerable to adversarial attacks, such as the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). These attacks inject subtle input perturbations, guided by the model's gradients, that degrade forecasting accuracy. To further exploit these vulnerabilities, we introduce a Learning Automata (LA) and Random LA-based approach that dynamically adjusts perturbations, making adversarial attacks more difficult to detect. Experimental results show that this approach significantly impacts prediction reliability, causing the Mean Absolute Percentage Error (MAPE) to rise from 26% to over 35%. Moreover, adaptive attack strategies amplify this effect, highlighting cybersecurity risks in AI-driven DTs. These findings emphasize the urgent need for robust defenses, including adversarial training, anomaly detection, and secure data pipelines.
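FGSM perturbs each input feature by a small step in the direction of the loss gradient's sign. A minimal sketch on a toy linear forecaster (not the paper's LSTM), where the gradient can be written out by hand:

```python
# Minimal FGSM sketch: for squared error L = (w.x - y)^2, the input
# gradient is dL/dx_i = 2*(w.x - y)*w_i, and the attack nudges each
# feature by eps in the direction of that gradient's sign.

def sign(v):
    return (v > 0) - (v < 0)

def fgsm_attack(x, w, y_true, eps):
    """Return the adversarially perturbed input for a linear model."""
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [2 * (pred - y_true) * wi for wi in w]
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w = [0.5, -0.3]   # hypothetical forecaster weights
x = [1.0, 2.0]    # clean input window
x_adv = fgsm_attack(x, w, y_true=0.0, eps=0.1)
print(x_adv)  # [0.9, 2.1]
```

Because each feature moves by at most eps, the perturbed series stays visually close to the clean one, which is why such attacks are hard to spot in a live DT pipeline.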


Revealed: Big tech's new datacentres will take water from the world's driest areas

The Guardian

Amazon, Microsoft and Google are operating datacentres that use vast amounts of water in some of the world's driest areas and are building many more, an investigation by SourceMaterial and the Guardian has found. With Donald Trump pledging to support them, the three technology giants are planning hundreds of datacentres in the US and across the globe, with a potentially huge impact on populations already living with water scarcity. "The question of water is going to become crucial," said Lorena Jaume-Palasí, founder of the Ethical Tech Society. "Resilience from a resource perspective is going to be very difficult for those communities." Efforts by Amazon, the world's largest online retailer, to mitigate its water use have sparked opposition from inside the company, SourceMaterial's investigation found, with one of its own sustainability experts warning that its plans are "not ethical".


Holistically Evaluating the Environmental Impact of Creating Language Models

Morrison, Jacob, Na, Clara, Fernandez, Jared, Dettmers, Tim, Strubell, Emma, Dodge, Jesse

arXiv.org Artificial Intelligence

As the performance of artificial intelligence systems has dramatically increased, so too has the environmental impact of creating these systems. While many model developers release estimates of the power consumption and carbon emissions from the final training runs for their latest models, there is comparatively little transparency into the impact of model development, hardware manufacturing, and total water usage throughout. In this work, we estimate the real-world environmental impact of developing a series of language models, ranging from 20 million to 13 billion active parameters, trained on up to 5.6 trillion tokens each. When accounting for hardware manufacturing, model development, and our final training runs, we find that our series of models released 493 metric tons of carbon emissions, equivalent to powering about 98 homes in the United States for one year, and consumed 2.769 million liters of water, equivalent to about 24.5 years of water usage by a person in the United States, even though our data center is extremely water-efficient. We measure and report the environmental impact of our model development; to the best of our knowledge we are the first to do so for LLMs, and we find that model development, the impact of which is generally not disclosed by most model developers, amounted to 50% of that of training. By looking at detailed time series data for power consumption, we also find that power usage throughout training is not consistent, fluctuating between 15% and 85% of our hardware's maximum power draw, with negative implications for grid-scale planning as demand continues to grow. We close with a discussion on the continued difficulty of estimating the environmental impact of AI systems, and key takeaways for model developers and the public at large.